
    Image blur estimation based on the average cone ratio in the wavelet domain

    In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of ACR are twofold: it is powerful in estimating local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of an image irrespective of the noise level. In particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is no reference. The results demonstrate consistent performance of the proposed metric for a wide class of natural images and over a wide range of out-of-focus blur. Moreover, the proposed method shows remarkable insensitivity to noise compared to other wavelet-domain methods.
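    The center of gravity of a histogram is a weighted mean of its bin centers. Below is a minimal sketch of that final step, assuming the ACR values have already been computed from the wavelet decomposition described in the paper (that computation is not shown, and the function name is hypothetical):

```python
import numpy as np

def histogram_center_of_gravity(acr_values, n_bins=64):
    """Sketch of the blur metric step: center of gravity (weighted mean)
    of the ACR histogram. The ACR values themselves would come from the
    paper's wavelet-domain analysis, which is not reproduced here."""
    counts, edges = np.histogram(acr_values, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Weighted mean of bin centers, with bin counts as weights.
    return np.sum(centers * counts) / max(np.sum(counts), 1)
```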

    Quantitative Analysis of Ultrasound Images of the Preterm Brain

    In this PhD thesis, new algorithms are proposed to better understand and diagnose white matter damage in the preterm brain. Since ultrasound imaging is the best-suited modality for inspecting brain pathologies in very low birth weight infants, we propose multiple techniques to assist in computer-aided diagnosis. As a main result, we are able to improve detectability from 70% with qualitative diagnosis to 98% with quantitative diagnosis.

    Channelized Hotelling observers for signal detection in stack-mode reading of volumetric images on medical displays with slow response time

    Volumetric medical images are commonly read in stack-browsing mode. However, previous studies suggest that the slow temporal response of medical liquid crystal displays may degrade diagnostic accuracy (lesion detectability) at browsing rates as low as 10 frames per second (fps). Recently, a multi-slice channelized Hotelling observer (msCHO) model was proposed to estimate detection performance in 3D images. That implementation of the msCHO restricted the analysis to the luminance of a display pixel at the end of the frame time (end-of-frame luminance), ignoring the luminance transition within the frame time (intra-frame luminance). Such an approach fails to differentiate between, for example, two displays with different temporal luminance profiles as long as their end-of-frame luminance levels are the same, a commonly encountered case. To overcome this limitation of the msCHO, we propose a new upsampled msCHO (umsCHO) which acts on images obtained using both the intra-frame and the end-of-frame luminance information. The two models are compared on a set of synthesized 3D images for a range of browsing rates (16.67, 25 and 50 fps). Our results demonstrate that, depending on the details of the luminance transition profiles, neglecting the intra-frame luminance information may lead to over- or underestimation of lesion detectability. We therefore argue that the umsCHO is more appropriate than the msCHO for estimating detection performance in stack-browsing mode.
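    To illustrate why intra-frame luminance matters, here is a minimal sketch assuming a first-order (exponential) LCD pixel response, a common simplification that is not stated in the abstract; all parameter values are hypothetical. Two displays can reach nearly the same end-of-frame luminance along very different intra-frame trajectories, which is exactly the information the umsCHO upsampling retains:

```python
import numpy as np

def luminance_profile(L_prev, L_target, tau, frame_time, n_samples):
    """Hypothetical first-order LCD response: exponential approach from
    the previous luminance toward the target within one frame time."""
    t = np.linspace(0.0, frame_time, n_samples)
    return L_target + (L_prev - L_target) * np.exp(-t / tau)

frame_time = 1.0 / 25.0  # one frame at a 25 fps browsing rate
# Two hypothetical displays with different response time constants:
fast = luminance_profile(0.0, 100.0, tau=0.003, frame_time=frame_time, n_samples=8)
slow = luminance_profile(0.0, 100.0, tau=0.008, frame_time=frame_time, n_samples=8)
print(fast[-1], slow[-1])  # end-of-frame luminance: nearly identical
print(fast[:4], slow[:4])  # intra-frame samples (used by umsCHO): clearly different
```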

    The use of steerable channels for detecting asymmetrical signals with random orientations

    In the optimization of medical imaging systems, there is a pressing need to shift from human observer studies to numerical observer studies, because of both cost and time limitations. Numerical models give an objective measure of the quality of displayed images for a given task and can be designed to predict the performance of medical specialists performing the same task. For the task of signal detection, the channelized Hotelling observer (CHO) has been used successfully, although several studies indicate an overefficiency of the CHO compared to human observers. One of the main causes of this overefficiency is the intrinsic uncertainty about the signal (such as its orientation) that a human observer has to deal with. Deeper knowledge of the discrepancies between the CHO and the human observer may provide extra insight into the processing of the human visual system, and this knowledge can be used to better fine-tune medical imaging systems.
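    For reference, the CHO computes a Hotelling template in a low-dimensional channel space. Below is a minimal NumPy sketch of the standard formulation, with the channel matrix (e.g. steerable or Gabor channels, whose construction is not shown) assumed to be given:

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, U):
    """Channelized Hotelling observer on flattened images.
    signal_imgs, noise_imgs: (n_images, n_pixels) arrays.
    U: (n_pixels, n_channels) channel matrix, e.g. steerable or
    Gabor channels. Returns the detectability index (SNR)."""
    v_s = signal_imgs @ U              # channel outputs, signal present
    v_n = noise_imgs @ U               # channel outputs, signal absent
    dv = v_s.mean(axis=0) - v_n.mean(axis=0)
    # Intra-class channel covariance, averaged over the two classes.
    C = 0.5 * (np.cov(v_s, rowvar=False) + np.cov(v_n, rowvar=False))
    w = np.linalg.solve(C, dv)         # Hotelling template in channel space
    t_s, t_n = v_s @ w, v_n @ w        # scalar test statistics
    return (t_s.mean() - t_n.mean()) / np.sqrt(0.5 * (t_s.var() + t_n.var()))
```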

    Effects of common image manipulations on diagnostic performance in digital pathology: human study

    A recent study (Ref. [1]) examined the effects of image manipulation and image degradation on the perceived attributes of image quality (IQ) of digital pathology slides. However, before any conclusions and recommendations can be formulated regarding specific image manipulations (and IQ attributes), it is necessary to investigate their effects on the diagnostic performance of clinicians interpreting these images. In this study, 6 expert pathologists interpreted digital images of H&E-stained animal pathology samples in a free-response (FROC) experiment. Participants marked locations suspicious for viral inclusions (inclusion bodies) and rated them on a continuous scale from 0% (low confidence) to 100% (high confidence). The images were the same as in Ref. [1]: crops of digital pathology slides of 3 different animal tissue samples, all 1200×750 pixels in size. Each participant viewed a total of 72 images: 12 non-manipulated (reference) images (4 of each tissue type) and 60 manipulated images (5 for each reference image). The extent of the artificial manipulations was adjusted relative to the reference images using the HDR-VDP metric [2] in the luminance domain: added Gaussian blur (σb = 3), decreased gamma (-5%), added white Gaussian noise (σn = 10), decreased color saturation (-5%), and JPEG compression (libjpeg quality 50). The images were displayed on a 3 MP medical color LCD in a controlled viewing environment. A preliminary analysis of the change in the number of positive markings between the reference and manipulated images indicates that blurring and changes in gamma, followed by changes in color saturation, could have an effect on diagnostic performance. This largely coincides with the findings of Ref. [1], where IQ ratings appeared to be most affected by changes in the color and gamma parameters. Importantly, diagnostic performance appears to be content dependent: it differs across tissue types. Further data analysis (including JAFROC) is ongoing and will be reported in the conference talk.
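    The five manipulations listed above are standard image operations. The sketch below shows one plausible way to reproduce them with Pillow and NumPy; the strength parameters follow the values quoted in the abstract, but the HDR-VDP-based calibration step is not replicated, and the exact pipelines used in the study may differ:

```python
import io
import numpy as np
from PIL import Image, ImageFilter, ImageEnhance

def manipulate(img, kind):
    """Approximations of the five manipulations from the study; the
    strengths match the quoted parameters, not the HDR-VDP calibration."""
    if kind == "blur":              # Gaussian blur, sigma_b = 3
        return img.filter(ImageFilter.GaussianBlur(radius=3))
    if kind == "gamma":             # gamma decreased by 5% (approximation)
        a = np.asarray(img, dtype=np.float64) / 255.0
        return Image.fromarray((255 * a ** 0.95).astype(np.uint8))
    if kind == "noise":             # white Gaussian noise, sigma_n = 10
        a = np.asarray(img, dtype=np.float64)
        a += np.random.normal(0.0, 10.0, a.shape)
        return Image.fromarray(np.clip(a, 0, 255).astype(np.uint8))
    if kind == "saturation":        # color saturation decreased by 5%
        return ImageEnhance.Color(img).enhance(0.95)
    if kind == "jpeg":              # JPEG compression, libjpeg quality 50
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=50)
        return Image.open(buf)
    raise ValueError(kind)
```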

    No-reference blur estimation based on the average cone ratio in the wavelet domain

    We propose a wavelet-based metric of image blurriness named CogACR, the Center of Gravity of the Average Cone Ratio. The metric is highly robust to noise and able to distinguish a wide range of blur levels. To automate CogACR blur estimation in a no-reference scenario, we introduce a novel method for image classification based on edge-content similarity. Our results indicate high accuracy of the CogACR metric for a range of natural scene images distorted with out-of-focus blur. Within the considered range of blur radii of 0 to 10 pixels, varied in steps of 0.25 pixels, the proposed metric estimates the blur radius with an absolute error of at most 1 pixel in 80 to 90% of the images.
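    The abstract implies a mapping from CogACR values back to a blur radius for each edge-content class. Below is a hypothetical sketch of that lookup step; the edge-content classifier and the calibration data themselves are assumptions and are not shown:

```python
import numpy as np

def estimate_blur_radius(cog_acr, calibration):
    """Hypothetical no-reference use of the CogACR metric: invert a
    monotone calibration curve measured for the image class selected
    by edge-content similarity (classifier not shown).
    calibration: (radii, cog_acr_values), e.g. radii 0..10 in 0.25 steps."""
    radii, values = calibration
    # np.interp needs increasing x-coordinates, so sort by CogACR value.
    order = np.argsort(values)
    return float(np.interp(cog_acr, values[order], radii[order]))
```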

    An improved fuzzy clustering approach for image segmentation

    Fuzzy clustering techniques have been widely used in automated image segmentation. However, since the standard fuzzy c-means (FCM) clustering algorithm does not consider any spatial information, it is highly sensitive to noise. In this paper, we present an extension of the FCM algorithm that overcomes this drawback by incorporating spatial neighborhood information into a new similarity measure. We consider that spatial information depends on the relative location and features of the neighboring pixels. The performance of the proposed algorithm is tested on synthetic and real images with different noise levels. Quantitative and qualitative experimental segmentation results show that the proposed method is effective, more robust to noise, and preserves the homogeneity of regions better than other FCM-based methods.
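    The abstract does not spell out the new similarity measure. The sketch below is a minimal FCM variant that injects spatial information by blending each membership map with its local neighborhood average; this is one plausible (hypothetical) realization, not the paper's exact formulation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(img, n_clusters=3, m=2.0, alpha=0.5, n_iter=50):
    """Sketch of FCM on a grayscale image with a simple spatial term:
    memberships are smoothed over a 3x3 neighborhood each iteration."""
    x = img.ravel().astype(np.float64)
    u = np.random.dirichlet(np.ones(n_clusters), size=x.size)  # memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)               # cluster centers
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12  # distances
        u = 1.0 / d ** (2.0 / (m - 1.0))                   # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
        # Spatial step: average each membership map over a 3x3 window.
        spatial = np.stack(
            [uniform_filter(u[:, k].reshape(img.shape), size=3).ravel()
             for k in range(n_clusters)], axis=1)
        u = (1.0 - alpha) * u + alpha * spatial
        u /= u.sum(axis=1, keepdims=True)
    return u.argmax(axis=1).reshape(img.shape), centers
```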

    Analysing wear in carpets by detecting varying local binary patterns

    Currently, carpet companies assess the quality of their products based on their appearance retention capabilities. For this, carpet samples with different degrees of wear after a simulated traffic exposure process are rated with wear labels by human experts, who compare the changes in appearance of the worn samples against samples with the original appearance. This process is subjective, and human raters err in up to 10% of cases. In search of an objective assessment, research using texture analysis has been conducted to automate the process. In particular, the local binary pattern (LBP) technique combined with a symmetric adaptation of the Kullback-Leibler divergence (SKL) has been successful in extracting texture features related to the wear labels from both intensity and range images. In this paper we present a novel extension of the LBP technique that improves the representation of the distinct wear labels. The technique consists in detecting those patterns that change monotonically with the wear labels while grouping the others. Computing the SKL from these patterns considerably increases the discrimination between consecutive groups, even for carpet types where other LBP variations fail. We present results for carpet types representing 72% of the existing references for the EN1471:1996 European standard.
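    For reference, the baseline LBP + SKL pipeline described above can be sketched as follows, using scikit-image's uniform LBP; the monotone-pattern selection step of the proposed extension is not shown:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_skl(img_ref, img_worn, P=8, R=1):
    """Texture change between an original and a worn sample: LBP
    histograms compared with the symmetric Kullback-Leibler
    divergence (SKL). Inputs are 2D grayscale arrays."""
    n_bins = P + 2  # number of labels produced by the 'uniform' method
    hists = []
    for img in (img_ref, img_worn):
        lbp = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins),
                               density=True)
        hists.append(hist + 1e-10)  # avoid log(0) in the divergence
    p, q = hists
    # Symmetric KL divergence: KL(p||q) + KL(q||p).
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```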

    Feature extraction of the wear label of carpets by using a novel 3D scanner

    In the textile industry, the quality of carpets is still determined through visual assessment by human experts. Human assessment is somewhat subjective, so there is a need for a more objective assessment, which points toward automated systems. However, existing computer models are not yet capable of matching human expertise. Most attempts at automated assessment have focused on analysis of two-dimensional images of worn carpet. These do not adequately capture the three-dimensional structure of the carpet, which is also evaluated by the experts, and the image processing is very dependent on the lighting conditions. One previous attempt, however, used a laser scanner to obtain three-dimensional images of the carpet and processed them for carpet assessment. This paper describes the development of a new scanner, based on a structured light pattern, to acquire wear label characteristics in three dimensions. An appropriate processing technique based on local binary patterns (LBP) and the Kullback-Leibler divergence has also been developed. We show that the new laser scanning system is less dependent on the lighting conditions and color of the carpet, and obtains data points on a structured grid instead of sparse points. The new system is also more than five times cheaper, scans more than seven times faster, and is specifically designed for scanning carpets rather than general 3D objects. Previous attempts to classify carpet wear were based on several extracted features, of which only one, the height difference between the worn and unworn parts, showed a good correlation of 0.70 with the carpet wear label. Experiments demonstrate that our approach using the LBP technique gives rise to promising results, with correlation factors from 0.89 to 0.99 between the Kullback-Leibler divergence and the quality labels. This new laser scanner system is a significant step forward in the automated assessment of carpet wear using 3D images.
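    As a usage note, correlation factors like those reported can be computed as a plain Pearson correlation between per-sample SKL values (e.g. from the lbp_skl sketch above) and the expert wear labels. The arrays below are made-up placeholders for illustration only:

```python
import numpy as np

# Placeholder data (not from the paper): one SKL value per scanned
# sample and the corresponding expert wear labels.
wear_labels = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
skl_values = np.array([4.1, 3.6, 3.0, 2.7, 2.2, 1.8, 1.4, 1.1, 0.8])

# Pearson correlation between feature and label; the magnitude is
# what is compared against the reported 0.70 and 0.89-0.99 figures.
r = np.corrcoef(skl_values, wear_labels)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```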